Coarse-grained Parallelism in Natural Language Understanding: Parsing as Message Passing
Authors
Abstract
A framework for concurrent, object-oriented natural language parsing is introduced. The underlying grammar model is fully lexicalized, head-driven, dependency-oriented, and structured along multiple inheritance hierarchies. The computation model relies upon the actor paradigm, with concurrency entering through asynchronous message passing. Protocols for establishing basic dependency relations and for coping with structural ambiguities and text-level anaphora are considered.
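The abstract describes the computation model but gives no code, so the following is only a hypothetical sketch of the idea, written in Python with one thread and mailbox per word standing in for actors. The message names (newWord, offerDependent, headFound) and the toy category/valency check are illustrative assumptions, not the paper's actual protocol, and conflict resolution for ambiguous attachments is omitted.

```python
import queue
import threading
import time

class WordActor(threading.Thread):
    """One actor per word; dependency relations are negotiated by asynchronous messages."""
    def __init__(self, word, category, governs=()):
        super().__init__(daemon=True)
        self.word, self.category = word, category
        self.governs = frozenset(governs)   # categories this word may govern (toy valency)
        self.head, self.dependents = None, []
        self.mailbox = queue.Queue()

    def send(self, msg):                    # asynchronous send: never blocks the sender
        self.mailbox.put(msg)

    def run(self):
        while True:
            kind, other = self.mailbox.get()
            if kind == "stop":
                return
            if kind == "newWord":
                # Either govern the newcomer or offer myself to it as a dependent.
                if other.category in self.governs:
                    self.dependents.append(other)
                    other.send(("headFound", self))
                elif self.category in other.governs and self.head is None:
                    other.send(("offerDependent", self))
            elif kind == "offerDependent":
                self.dependents.append(other)
                other.send(("headFound", self))
            elif kind == "headFound":
                self.head = other

# Incremental left-to-right analysis of "the robot parses text" with a toy lexicon.
lexicon = [("the", "det", ()), ("robot", "noun", ("det",)),
           ("parses", "verb", ("noun",)), ("text", "noun", ())]
actors = []
for word, cat, gov in lexicon:
    newcomer = WordActor(word, cat, gov)
    newcomer.start()
    for earlier in actors:                  # announce the new word to all earlier actors
        earlier.send(("newWord", newcomer))
    actors.append(newcomer)

time.sleep(0.2)                             # let the asynchronous protocol settle
for a in actors:
    a.send(("stop", None))
for a in actors:
    a.join()
    print(f"{a.word:7s} head={a.head.word if a.head else '-'} "
          f"deps={[d.word for d in a.dependents]}")
```

Run as a script, this prints a small dependency tree ("the" attached to "robot", "robot" and "text" attached to "parses"), arrived at purely through messages between word actors rather than a central parsing procedure.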
Similar papers
An improved joint model: POS tagging and dependency parsing
Dependency parsing is a form of syntactic parsing that automatically analyzes the dependency structure of natural-language sentences, producing a dependency graph for each input sentence. Part-Of-Speech (POS) tagging is a prerequisite for dependency parsing. Generally, dependency parsers perform the POS tagging task together with dependency parsing in a pipeline mode. Unfortunately, in pipel...
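To make the pipeline mode concrete, here is a toy sketch (not the joint model proposed in that paper): a tagger runs first and its tags are handed unchanged to the parser, so any tagging error propagates into the dependency graph. The lookup tagger and the "attach everything to the first verb" head rule are invented purely for illustration.

```python
TOY_TAGS = {"the": "DET", "dog": "NOUN", "barks": "VERB"}

def pos_tag(tokens):
    """Stage 1: assign one POS tag per token (toy lookup tagger)."""
    return [(tok, TOY_TAGS.get(tok, "NOUN")) for tok in tokens]

def parse(tagged):
    """Stage 2: attach every non-verb to the first verb (toy head rule)."""
    head = next((i for i, (_, t) in enumerate(tagged) if t == "VERB"), -1)
    return [(i, head if t != "VERB" else -1) for i, (_, t) in enumerate(tagged)]

sentence = ["the", "dog", "barks"]
tags = pos_tag(sentence)   # pipeline mode: the parser never revises these tags
arcs = parse(tags)
print(tags)   # [('the', 'DET'), ('dog', 'NOUN'), ('barks', 'VERB')]
print(arcs)   # [(0, 2), (1, 2), (2, -1)]  -> (token index, head index; -1 = root)
```

A joint model, by contrast, would score tags and arcs together instead of freezing the tagger's output before parsing begins.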
Supporting Coarse and Fine Grain Parallelism in an Extension of ML
We have built an extension of Standard ML aimed at multicomputer platforms with distributed memories. The resulting language, paraML, differs from other extensions by including and differentiating both coarse-grained and fine-grained parallelism. The basis for coarse-grained parallelism in paraML is process creation where there is no sharing of data, with communication between processes via asy...
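As an illustration of that coarse-grained model (transposed from paraML to Python purely for the sketch), the workers below are separate processes that share no data and interact only through asynchronous message passing over queues; the squaring task is a placeholder.

```python
import multiprocessing as mp

def worker(inbox, outbox):
    """Each process owns its own data; it only sees what arrives in its inbox."""
    while True:
        msg = inbox.get()
        if msg is None:                  # sentinel: shut down
            break
        outbox.put(("done", msg * msg))  # reply asynchronously

if __name__ == "__main__":
    inbox, outbox = mp.Queue(), mp.Queue()
    procs = [mp.Process(target=worker, args=(inbox, outbox)) for _ in range(4)]
    for p in procs:
        p.start()
    for n in range(8):                   # sends do not block on the receivers
        inbox.put(n)
    results = [outbox.get() for _ in range(8)]
    for _ in procs:
        inbox.put(None)
    for p in procs:
        p.join()
    print(sorted(r for _, r in results))  # [0, 1, 4, 9, 16, 25, 36, 49]
```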
Trading off Completeness for Efficiency: The ParseTalk Performance Grammar Approach to Real-World Text Parsing
We argue for a performance-based design of natural language grammars and their associated parsers in order to meet the constraints posed by real-world natural language understanding. This approach incorporates declarative and procedural knowledge about language and language use within an object-oriented specification framework. We discuss several message passing protocols for real-world text pa...
Extended Parallelism Models For Optimization On Massively Parallel Computers
Single-level parallel optimization approaches, those in which either the simulation code executes in parallel or the optimization algorithm invokes multiple simultaneous single-processor analyses, have been investigated previously and have been shown to be effective in reducing the time required to compute optimal solutions. However, these approaches have clear performance limitatio...
Combining Message-passing and Directives in Parallel Applications
Developers of parallel applications can be faced with the problem of combining the two dominant models for parallel processing—distributed-memory and shared-memory parallelism—within one source code. In this article we discuss why it is useful to combine these two programming methodologies, both of which are supported on most high-performance computers, and some of the lessons we learned in wor...
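A rough illustration of that combination, with Python standing in for both layers: processes exchanging messages model the distributed-memory (MPI-style) part, and a thread pool inside each process models the shared-memory (directive-style) part. This only shows the two-level decomposition of the work, not real MPI or OpenMP directives.

```python
import multiprocessing as mp
from concurrent.futures import ThreadPoolExecutor

def node(rank, chunk, result_queue):
    """One 'node': threads share this process's memory to work on its local chunk."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        partial = sum(pool.map(lambda x: x * x, chunk))
    result_queue.put((rank, partial))           # message passing between nodes

if __name__ == "__main__":
    data = list(range(1000))
    chunks = [data[i::4] for i in range(4)]     # one chunk per process
    results = mp.Queue()
    procs = [mp.Process(target=node, args=(r, c, results))
             for r, c in enumerate(chunks)]
    for p in procs:
        p.start()
    total = sum(results.get()[1] for _ in procs)  # gather partial sums via messages
    for p in procs:
        p.join()
    print(total == sum(x * x for x in data))      # True
```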